
    Introduction to Deep Learning: a practical point of view

    [EN] Deep Learning is currently used for numerous Artificial Intelligence applications, especially in computer vision for image classification and recognition tasks. Thanks to its increasing popularity, several tools have been created to exploit the potential of this technology. Although a wide range of benchmarks already offer evaluations of hardware architectures and Deep Learning software tools, these projects do not usually address specific performance aspects, nor do they consider a complete set of popular models and datasets at the same time. Moreover, valuable metrics such as GPU memory and power usage are not typically measured and compared efficiently for a deeper analysis. This report aims to provide a complete discussion of the recent progress of Deep Learning techniques by evaluating various hardware platforms and by highlighting the key trends of the main Deep Learning frameworks. It also reviews the most popular development tools that allow users to get started in this field, and it underlines the benchmarking metrics and designs that should be used to evaluate the growing number of Deep Learning projects. Furthermore, the data obtained from the comparisons and the testing results are presented in order to assess the performance of the Deep Learning environments examined.
The reader will find the following in the pages that follow: a general Deep Learning study covering the main state-of-the-art concepts of the subject; a careful examination of benchmarking methods and standards for the evaluation of intelligent environments; the approach taken and the project built to carry out the experiments and tests; and a discussion of the conclusions drawn from the results obtained.
    [ES, translated] Machine learning systems have become popular in recent times thanks to the availability of compute accelerators such as Graphics Processing Units (GPUs). These systems use Artificial Intelligence techniques known as Deep Learning, which are based on neural networks. Such networks, which require large amounts of computing power, are not new; they have been known for many years, but their use has accelerated in recent years thanks to GPUs. These learning systems allow computers to perform tasks such as automatic image recognition and are, for example, one of the foundations of the self-driving cars so present in the news in recent months. In short, Deep Learning technology combined with the massive use of CUDA-compatible GPUs has enabled an important qualitative leap in many areas. However, given how recently these technologies appeared, and given that the four years of the degree require a careful selection of what is taught, the current curriculum does not cover them in the depth that some students would like. This TFG (final degree project) proposes that the student carry out a practical exploration of these technologies: installing and using several Deep Learning environments and then evaluating their performance on different GPU-based hardware systems, using a cluster equipped with latest-generation GPUs whose nodes each contain several GPUs.
    Piccione, A. (2018). Introduction to Deep Learning: a practical point of view. TFG. http://hdl.handle.net/10251/107792
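The report argues that benchmarks should record not just latency but also GPU memory and power usage. A minimal sketch of such a harness is shown below; the names (`benchmark`, `matmul_step`) are illustrative, not from the thesis, and the GPU-side metrics are omitted because they require vendor tooling (e.g. NVML) on real hardware:

```python
import time

def benchmark(workload, warmup=1, runs=5):
    """Time a workload after warm-up runs, returning summary statistics.
    On real hardware, GPU memory and power would be sampled alongside
    each run via vendor tools such as NVML."""
    for _ in range(warmup):            # warm-up iterations are excluded
        workload()
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return {
        "runs": runs,
        "mean_s": sum(timings) / runs,
        "min_s": min(timings),
        "max_s": max(timings),
    }

# toy stand-in for a training/inference step
def matmul_step(n=80):
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

stats = benchmark(matmul_step)
```

Separating warm-up from timed runs matters in practice, since the first iterations on a GPU include kernel compilation and memory-allocation overhead that would skew the averages.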

    Physics-guided machine learning approaches to predict stability properties of fusion plasmas

    Disruption prediction and avoidance is a critical need for next-step tokamaks such as the International Thermonuclear Experimental Reactor (ITER). The Disruption Event Characterization and Forecasting Code (DECAF) is a framework used to fully determine chains of events, such as magnetohydrodynamic (MHD) instabilities, that can lead to disruptions. In this thesis, several interpretable and physics-guided machine learning (ML) techniques to forecast the onset of resistive wall modes (RWMs) in spherical tokamaks have been developed and incorporated into DECAF. The new DECAF model operates in a multi-step fashion, analysing the ideal stability properties and then including kinetic effects on RWM stability. First, a random forest regressor (RFR) and a neural network (NN) ensemble are employed to reproduce the change in plasma potential energy without wall effects, δW_no-wall, computed by the DCON ideal stability code for a large database of equilibria from the National Spherical Torus Experiment (NSTX). Moreover, outputs from the ML models are reduced and manipulated to obtain an estimate of the no-wall β limit, β_no-wall (where β is the ratio of plasma pressure to magnetic confinement field pressure). This exercise shows that the ML models improve on previous DECAF characterisation of stable and unstable equilibria and achieve accuracies of 85-88%, depending on the chosen level of interpretability. The physics guidance imposed on the NN objective function allowed for transferability outside the training domain, demonstrated by testing the algorithm on discharges from the Mega Ampere Spherical Tokamak (MAST). The estimated β_no-wall and other important plasma characteristics, such as rotation, collisionality and low-frequency MHD activity, are used as input to a customised random forest (RF) classifier to predict RWM stability for a set of human-labeled NSTX discharges.
    The proposed approach is real-time compatible and outperforms classical cost-sensitive methods, achieving a true positive rate (TPR) of up to 90% while also reducing training time threefold. Finally, a model-agnostic method based on counterfactual explanations is developed to further understand the model's predictions. Good agreement is found between the model's decisions and the rules expected from physics. These results also motivate the use of counterfactuals to simulate real-time control by generating the βN levels that would keep the RWM stable.
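The counterfactual idea described above (finding the βN levels that keep the RWM stable) can be illustrated with a toy sketch. The decision rule below is invented purely for illustration; in the thesis the classifier is a trained random forest, and the counterfactual search is model-agnostic:

```python
# Hypothetical stability classifier: in this toy rule, higher plasma
# rotation stabilises the RWM, so the beta_N limit rises with rotation.
def rwm_stable(beta_n, rotation):
    return beta_n < 4.0 + 0.5 * rotation

def counterfactual_beta(rotation, beta_max=8.0, step=0.01):
    """Scan beta_N upward and return the largest value the (toy)
    classifier still labels stable: a counterfactual target for control."""
    beta, last_stable = 0.0, 0.0
    while beta <= beta_max:
        if rwm_stable(beta, rotation):
            last_stable = beta
        beta += step
    return last_stable
```

A real counterfactual explainer would perturb all input features jointly under proximity constraints, but the control-oriented use case reduces to exactly this kind of one-dimensional query: how far can βN rise before the predicted label flips?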

    Lezioni di Fisica con il cellulare e gli SMS (Physics lessons with mobile phones and SMS)

    [Translated from Italian] The lack of technological equipment in schools becomes even more significant when many of the pupils already suffer the limits of economically disadvantaged and culturally deprived backgrounds. To address these limits, I developed, in the first-year classes of a vocational institute (Istituto Professionale), activities designed to use resources at no additional cost to the school or the students. The approach uses mobile phones as teaching tools, which also prove to be a useful interface for accessing free online applications via SMS. In this work I present some classroom practices I adopted for introducing physical quantities.

    The teaching of measurement from epistemology to computational thinking

    In this work, the concept of measurement and the physical quantities were chosen as the core around which to develop an inclusive didactical practice that is cognitively demanding, because it is based on contemporary epistemological research and the rigor of international standards. This approach, together with the support of methods drawn from computer programming, enabled the creation of a path that is widely accessible to students while preserving a correct definition of measurement. Through an appropriate didactical transposition, the tools described here can help raise the level of educational practice in the classroom, e.g. by promoting computational thinking.
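One way to connect the rigorous treatment of measurement to computational thinking, in the spirit of the abstract (this sketch is not taken from the paper), is to express the standard treatment of repeated readings as a short program: the best estimate is the mean, and the Type A standard uncertainty is the sample standard deviation divided by the square root of the number of readings:

```python
import math
import statistics

def measure(readings):
    """Best estimate and Type A standard uncertainty (GUM-style)
    for a list of repeated readings of the same quantity."""
    n = len(readings)
    mean = statistics.fmean(readings)
    u = statistics.stdev(readings) / math.sqrt(n)   # s / sqrt(n)
    return mean, u

# five repeated readings of a length, in cm
length_cm = [12.1, 12.3, 12.2, 12.2, 12.4]
best, u = measure(length_cm)
```

Students who write the calculation themselves, rather than receiving a formula, meet both the international definition of standard uncertainty and a concrete exercise in algorithmic thinking.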

    Neural network determination of parton distributions: the nonsinglet case

    We provide a determination of the isotriplet quark distribution from available deep-inelastic data using neural networks. We give a general introduction to the neural network approach to parton distributions, which provides a solution to the problem of constructing a faithful and unbiased probability distribution of parton densities based on the available experimental information. We discuss in detail the techniques needed to construct a Monte Carlo representation of the data, to construct and evolve neural parton distributions, and to train them in such a way that the correct statistical features of the data are reproduced. We present the results of applying this method to the determination of the nonsinglet quark distribution up to next-to-next-to-leading order, and compare them with those obtained using other approaches.
    Comment: 46 pages, 18 figures, LaTeX with JHEP3 class
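The Monte Carlo representation mentioned above can be sketched in a few lines: each replica is a copy of the data fluctuated according to its experimental uncertainty, so that the ensemble of replicas reproduces the statistical features of the measurements. The toy below assumes uncorrelated Gaussian errors only; the full procedure also propagates correlated systematics and normalisation uncertainties:

```python
import random
import statistics

def make_replicas(values, errors, n_rep, seed=0):
    """Generate Gaussian pseudo-data replicas: each replica point is
    value + eps * error with eps ~ N(0, 1). Uncorrelated errors only."""
    rng = random.Random(seed)
    return [
        [v + rng.gauss(0.0, e) for v, e in zip(values, errors)]
        for _ in range(n_rep)
    ]

data = [0.45, 0.31, 0.12]      # toy structure-function points
errs = [0.02, 0.01, 0.01]      # their (uncorrelated) uncertainties
reps = make_replicas(data, errs, n_rep=1000)
```

Averaging any observable over the replica ensemble then gives its central value, and the spread across replicas gives its uncertainty, which is the key payoff of the Monte Carlo approach.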

    Is Your Smartphone Really Safe? A Wake-up Call on Android Antivirus Software Effectiveness

    A decade ago, researchers raised severe concerns about Android smartphones’ security by extensively assessing and recognising the limitations of Android antivirus software. Considering the significant increase in the economic role of smartphones in recent years, we would expect security measures to have improved significantly by now. To test this assumption, we conducted a relatively extensive study to evaluate the effectiveness of off-the-shelf antivirus software in detecting malicious applications injected into legitimate Android applications. We specifically repackaged seven widely used Android applications with 100 obfuscated malware instances. We submitted the 700 samples to the VirusTotal web portal, testing the effectiveness of the more than 70 free and commercial antivirus engines available there in detecting them. For the obfuscation part, we intentionally employed publicly available tools that could be used by a merely tech-savvy adversary, combining well-known and novel (but still simple) obfuscation techniques. Surprisingly (or perhaps unsurprisingly?), our findings indicate that almost 76% of the samples went utterly undetected. Even when our samples were detected, only a handful (never more than 4) of the antivirus engines on VirusTotal flagged them. This lack of awareness of the effectiveness of Android antivirus is critical, because the false sense of security given by antivirus software could prompt users to install applications from untrusted sources, allowing attackers to easily place a persistent threat within another application.
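The study's headline figures (the share of samples going undetected, and the maximum number of engines ever flagging a sample) are straightforward aggregates over per-sample detection counts. A sketch of that bookkeeping is below, with invented toy data; the real input would be the per-sample engine counts returned by VirusTotal:

```python
def detection_summary(detections_per_sample):
    """Summarise VirusTotal-style results. `detections_per_sample` maps
    each submitted sample to the number of engines that flagged it."""
    counts = list(detections_per_sample.values())
    undetected = sum(1 for c in counts if c == 0)
    return {
        "samples": len(counts),
        "undetected_pct": round(100.0 * undetected / len(counts), 1),
        "max_engines": max(counts),
    }

# toy results: 3 of 4 repackaged samples slip past every engine
toy = {"app_a": 0, "app_b": 0, "app_c": 2, "app_d": 0}
summary = detection_summary(toy)
```

Reporting both numbers matters: a low undetected percentage alone could hide the fact that each detection comes from only one or two of the 70+ engines.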

    Magnetoencephalography in Stroke Recovery and Rehabilitation

    Magnetoencephalography (MEG) is a non-invasive neurophysiological technique used to study the cerebral cortex. Currently, MEG is used clinically mainly to localize epileptic foci and eloquent brain areas in order to avoid damaging them during neurosurgery. MEG might, however, also help in monitoring stroke recovery and rehabilitation. This review focuses on the experimental use of MEG in neurorehabilitation. MEG has been employed to detect early changes in neuroplasticity and connectivity, but there is insufficient evidence as to whether these methods are sensitive enough to serve as a clinical diagnostic test. MEG has also been exploited to derive the relationship between brain activity and movement kinematics for a motor-based brain-computer interface. In the current body of experimental research, MEG appears to be a powerful tool for neurorehabilitation, but new data are needed to confirm its clinical utility.